AIbase · 2024-08-07 11:24:58 · 10.9k
Huawei and Fudan University Join Forces on EmoTalk3D, a New Framework for 3D Digital Humans: Realistic, Rich Expressions of Joy, Anger, Sorrow, and Delight
In the field of 3D digital humans, research teams from Nanjing University, Fudan University, and Huawei Noah's Ark Lab have proposed an innovative solution to the problems of multi-view inconsistency and limited emotional expressiveness. They built the EmoTalk3D dataset, which includes calibrated multi-view videos, emotion annotations, and frame-by-frame 3D geometry. On top of this dataset, they introduced a mapping framework from 'speech to geometry to appearance' that synthesizes 3D talking avatars with controllable emotions, markedly improving lip synchronization and rendering quality.
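For intuition, the sketch below shows what a 'speech to geometry to appearance' pipeline of this kind could look like in PyTorch. It is a minimal illustration only: the module names, feature dimensions, vertex count, and emotion encoding are assumptions made for the example and do not reflect the authors' actual implementation.

```python
# Hypothetical sketch of a speech -> geometry -> appearance pipeline for emotional
# talking heads. All sizes and module choices are illustrative assumptions.
import torch
import torch.nn as nn


class SpeechEncoder(nn.Module):
    """Encodes a sequence of audio features (e.g. mel frames) into per-frame latents."""
    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True)

    def forward(self, mel):            # mel: (batch, frames, n_mels)
        out, _ = self.rnn(mel)
        return out                     # (batch, frames, hidden)


class GeometryPredictor(nn.Module):
    """Maps speech latents plus an emotion label to per-frame 3D vertex positions."""
    def __init__(self, hidden=256, n_emotions=8, n_vertices=5023):
        super().__init__()
        self.emotion_emb = nn.Embedding(n_emotions, hidden)
        self.head = nn.Linear(hidden * 2, n_vertices * 3)

    def forward(self, speech_latent, emotion_id):
        emo = self.emotion_emb(emotion_id)                     # (batch, hidden)
        emo = emo.unsqueeze(1).expand(-1, speech_latent.size(1), -1)
        x = torch.cat([speech_latent, emo], dim=-1)
        verts = self.head(x)                                   # (batch, frames, n_vertices*3)
        return verts.view(*verts.shape[:2], -1, 3)


class AppearanceModel(nn.Module):
    """Predicts per-vertex appearance features (e.g. colors) from the predicted geometry."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, vertices):       # vertices: (batch, frames, n_vertices, 3)
        return self.mlp(vertices)      # (batch, frames, n_vertices, feat_dim)


if __name__ == "__main__":
    # One 2-second clip at 25 fps with an arbitrary emotion label index.
    mel = torch.randn(1, 50, 80)
    emotion = torch.tensor([3])
    speech = SpeechEncoder()(mel)
    geometry = GeometryPredictor()(speech, emotion)
    appearance = AppearanceModel()(geometry)
    print(geometry.shape, appearance.shape)  # per-frame geometry and appearance features
```

The key design idea this illustrates is the two-stage decomposition reported in the article: audio first drives frame-by-frame 3D geometry (conditioned on an emotion signal), and appearance is then predicted from that geometry before rendering, rather than mapping speech to pixels directly.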